

Search for: All records

Creators/Authors contains: "Rivera, Sergio"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).
What is a DOI Number?

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1.
    The high penetration of renewable energy sources in electrical power systems introduces additional uncertainty into the economic dispatch (ED) problem. Uncertainty costs are a metric for quantifying the variability introduced by renewable generation, namely wind energy generation (WEG), run-of-the-river hydro generation (RHG), and solar photovoltaic generation (PVG). In addition, there are uncertainties associated with the charging and discharging of plug-in electric vehicles (PEVs). This paper therefore presents the uncertainty cost functions (UCFs) and their marginal expressions as a way to model and assess stochasticity in power systems with high penetration of smart-grid elements. A mathematical analysis based on the first and second derivatives of the UCF is presented, from which the marginal uncertainty cost functions (MUCFs) and the UCF minima for PVG, WEG, PEV, and RHG are derived. Finally, a model validation is presented, comparing the state-of-the-art UCF minimum developed in a previous study with the minimum reached by the proposed MUCF solution.
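The first-order condition described above (setting the marginal UCF to zero) can be illustrated numerically. The sketch below is not the paper's exact model: it assumes an illustrative piecewise-linear uncertainty cost for wind with invented under/over-estimation penalties, and checks that the grid-search minimum lands at the quantile the marginal condition predicts.

```python
import random

# Illustrative sketch (assumed model, not the paper's): piecewise-linear
# uncertainty cost for wind with invented penalty coefficients.
random.seed(0)
wind = [random.uniform(0.0, 100.0) for _ in range(5_000)]  # wind scenarios (MW)
c_under, c_over = 3.0, 1.0  # assumed $/MWh penalties for under/over-scheduling

def ucf(x):
    """Expected uncertainty cost of scheduling x MW against uncertain wind."""
    n = len(wind)
    return (c_under * sum(max(w - x, 0.0) for w in wind) / n
            + c_over * sum(max(x - w, 0.0) for w in wind) / n)

# Setting the marginal UCF (first derivative) to zero places the minimum at
# the c_under / (c_under + c_over) quantile of the wind distribution (75%).
x_grid = min(range(101), key=ucf)                  # numerical minimum
x_quantile = sorted(wind)[int(0.75 * len(wind))]   # first-order condition
```

Under these assumptions the two answers agree to within the grid spacing, which is the point of deriving the MUCF analytically: the minimum can be read off directly instead of searched for.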
  2.
    Smart microgrids (SMGs) may face energy rationing due to the unavailability of energy resources. Demand response (DR) in SMGs is useful not only in emergencies, where load cuts can be planned as reductions in consumption, but also in normal operation. SMG energy resources include storage systems, dispatchable units, and resources with uncertainty, such as residential demand, renewable generation, electric vehicle traffic, and electricity markets. An aggregator can optimize the scheduling of these resources; however, without safeguards it may curtail load demand entirely in order to increase profits. The DR function (DRF) is developed as a minimum-supply constraint on demand and contributes to solving the 0-1 knapsack problem (KP), a combinatorial optimization. The 0-1 KP allocates the limited stored energy capacity and is effective at selecting which loads to disconnect. Both constraints, the 0-1 KP and the DRF, are compared on a ranking index, load-reduction percentage, and execution time. The two perform similarly on these indicators, except for the ranking index, on which the DRF performs better. The DRF reduces the minimum demand to 25% to avoid non-optimal situations, such as failing to supply the demand, and offers further benefits, such as eliminating the finite combinations and being easy to implement.
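The 0-1 knapsack constraint above can be sketched concretely: given limited stored energy, choose which loads stay supplied so that total priority value is maximized, and shed the rest. The load names, demands, and priority values below are invented for the example; the dynamic program is the standard 0-1 KP solution.

```python
# Illustrative 0-1 knapsack for load shedding: keep the highest-priority
# loads that fit within the available capacity. Loads are invented.
def knapsack_01(loads, capacity):
    """loads: (name, demand_kW, priority); capacity: kW that can stay supplied."""
    dp = [(0, ())] * (capacity + 1)   # dp[c] = (best value, loads kept) at capacity c
    for name, demand, value in loads:
        new = list(dp)
        for c in range(demand, capacity + 1):
            v, chosen = dp[c - demand]
            if v + value > new[c][0]:
                new[c] = (v + value, chosen + (name,))
        dp = new
    return dp[capacity]

loads = [("heating", 4, 40), ("ev_charger", 3, 25),
         ("lighting", 2, 20), ("hvac", 5, 36)]
value, kept = knapsack_01(loads, capacity=9)   # loads not in `kept` are shed
```

A DRF-style minimum-supply constraint would sit on top of this, forbidding solutions that curtail demand below the floor (25% in the paper) regardless of the knapsack's profit-driven choice.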
  3.
    Contingency Constrained Optimal Power Flow (CCOPF) differs from traditional Optimal Power Flow (OPF) in that its generation dispatch is planned to keep state variables within constraint limits under a specific contingency. When no change in the power dispatch is desired after the contingency occurs, the CCOPF is studied from a preventive perspective; when the dispatch must change to keep the post-contingency system within limits, the problem is studied from a corrective perspective. As current power-system software tools mainly focus on the traditional OPF problem, having the means to solve CCOPF will benefit power-system planning and operation. This paper presents a Quadratically Constrained Quadratic Programming (QCQP) formulation built within the matpower environment as a solution strategy for the preventive CCOPF. Moreover, an extended OPF model that forces the network to meet all constraints under contingency is proposed as a strategy to find the power dispatch solution for the corrective CCOPF. Validation is performed on the IEEE 14-bus test system, including photovoltaic generation in one simulation case. It was found that in the QCQP formulation the calculated power dispatch barely differs between the pre- and post-contingency scenarios, while in the extended OPF power network, node voltage values in the pre- and post-contingency scenarios are equal despite a different power dispatch in each scenario. This suggests that both the proposed QCQP and extended OPF formulations could be implemented in power-system software tools to solve CCOPF problems from a preventive or corrective perspective.
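The preventive/corrective distinction can be shown on a toy system. The sketch below is not the paper's QCQP formulation: it is a two-generator dispatch with invented quadratic costs, where the preventive solution must already respect the tighter limit that a line outage would impose, while a corrective scheme could use a different dispatch per state.

```python
# Toy two-generator dispatch (costs c*P^2, limits invented). Preventive CCOPF
# picks ONE dispatch feasible in both pre- and post-contingency states.
def dispatch(demand, c1, c2, p1_max):
    # Equal marginal cost split (2*c1*p1 = 2*c2*p2), then clamp to the limit.
    p1 = demand * c2 / (c1 + c2)
    p1 = min(p1, p1_max)
    return p1, demand - p1

normal = dispatch(100.0, 0.1, 0.2, p1_max=80.0)      # pre-contingency limit
preventive = dispatch(100.0, 0.1, 0.2, p1_max=60.0)  # limit if the line trips
```

Here the preventive dispatch (60, 40) is more expensive than the unconstrained optimum (about 66.7, 33.3), which is the usual price of requiring a single dispatch that survives the contingency unchanged.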
  4. Traditional high-performance computing (HPC) centers that operate a single large supercomputer cluster have not required sophisticated mechanisms to manage and enforce network policies. Recently, HPC centers have expanded to support a wide range of computational infrastructure, such as OpenStack-based private clouds and Ceph object stores, each with its own unique characteristics and network security requirements. Network security policies are becoming more complex and harder to manage. To address this challenge, this paper explores ways to define and manage the new network policies required by emerging HPC systems. As a first step, we identify the new types of policies that are required and the technical capabilities needed to support them. We present example policies and discuss ways to implement them using emerging programmable networks and intent-based networking. We describe our initial work toward automatically converting human-readable network policies into network configurations and programmable network controllers that implement those policies using business rule management systems.
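The conversion step described above can be sketched as a tiny compiler from a human-readable rule to an OpenFlow-style match/action entry. The policy syntax, subnet names, and field names below are invented for illustration; they are not the paper's rule language.

```python
import re

# Illustrative policy compiler: a one-line human-readable rule becomes a
# match/action entry. Syntax, subnets, and field names are assumptions.
def compile_policy(text, subnets):
    m = re.fullmatch(r"(allow|deny) (\S+) to (\S+) port (\d+)", text)
    action, src, dst, port = m.groups()
    return {"match": {"ipv4_src": subnets[src], "ipv4_dst": subnets[dst],
                      "tcp_dst": int(port)},
            "action": "output" if action == "allow" else "drop"}

subnets = {"openstack-net": "10.10.0.0/16", "ceph-net": "10.20.0.0/16"}
rule = compile_policy("allow openstack-net to ceph-net port 6789", subnets)
```

A business rule management system would replace the regular expression with a maintained rule base, but the pipeline shape is the same: named intent in, concrete device configuration out.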
  5. A key concept of software-defined networking (SDN) is separation of the control and data planes. This idea provides several benefits, including fine-grained network control and monitoring, and the ability to deploy new services in a limited scope. Unfortunately, it is often cost-prohibitive for enterprises (and universities in particular) to upgrade their existing networks to wholly SDN-capable networks all at once. A compromise solution is to deploy SDN capabilities incrementally in the network. The challenge then is to take full advantage of SDN-based services throughout the network, in an integrated fashion rather than in a few "islands" of SDN support. At the University of Kentucky, SDN has been integrated into the campus network for several years. In this paper, we describe two aspects of this challenge, along with our solution approaches. One is the general reluctance of campus network administrators to allow novel or experimental (SDN-based) services in the production network. The other is how to extend such services throughout the legacy part of the network. For the former, we lay out a set of principles designed to ensure that the production service is not harmed. For the latter, we use policy-based routing and a graph database to extend our previously described VIP Lanes service. Our simulation results in a campus-like topology testbed show that we can provide a host with custom path service even if it is connected to a legacy router.
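The policy-based routing idea used above, steering approved flows by source/destination rather than destination alone, can be sketched minimally. The addresses, next-hop names, and lookup shape are invented; a real legacy router would express this as PBR route-map entries.

```python
# Minimal policy-based routing sketch: flows from approved hosts get a custom
# (VIP Lanes-style) next hop; everything else takes the default route.
# Addresses and next-hop names are invented for the example.
vip_routes = {("10.5.1.10", "10.9.0.2"): "sdn-gw"}   # (src, dst) -> next hop

def next_hop(src, dst, default="core-rtr"):
    return vip_routes.get((src, dst), default)
```

The point is that the legacy edge only needs this one extra lookup; the graph database on the controller side decides which (src, dst) pairs deserve an entry.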
  6. HPC networks and campus networks are beginning to leverage various levels of network programmability, ranging from programmable network configuration (e.g., NETCONF/YANG, SNMP, OF-CONFIG) to software-based controllers (e.g., OpenFlow controllers) to dynamic function placement via network function virtualization (NFV). While programmable networks offer new capabilities, they also make the network more difficult to debug. When applications experience unexpected network behavior, there is no established method to investigate the cause in a programmable network, and many conventional troubleshooting and debugging tools (e.g., ping and traceroute) can turn out to be completely useless. This absence of troubleshooting tools that support programmability is a serious challenge for researchers trying to understand the root cause of their networking problems. This paper explores the challenges of debugging an all-campus science DMZ network that leverages SDN-based network paths for high-performance flows. We propose Flow Tracer, a lightweight, data-plane-based debugging tool for SDN-enabled networks that allows end users to dynamically discover how the network is handling their packets. In particular, we focus on identifying an SDN path by using actual packets from the flow being analyzed, as opposed to existing expensive approaches in which probe packets are injected into the network or actual packets are duplicated for tracing purposes. Our simulation experiments show that Flow Tracer has negligible impact on the performance of monitored flows. Moreover, our tool can be extended to obtain further information about actual switch behavior, topology, and other flow information without privileged access to the SDN control plane.
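The path-identification problem above can be illustrated by replaying a real packet's header through each switch's flow table, first matching rule wins, and recording the hops. The flow tables and match fields below are invented toy data, not Flow Tracer's mechanism, which works in the data plane without controller access.

```python
# Toy path reconstruction: match a packet against each switch's flow table
# (first matching rule wins) and follow the output ports. Tables are invented.
tables = {
    "sw1": [({"dst": "10.0.0.2"}, "sw2"), ({}, "sw3")],  # {} = match-all default
    "sw2": [({"dst": "10.0.0.2"}, "h2")],
    "sw3": [({}, "h3")],
}

def trace(pkt, node):
    path = [node]
    while node in tables:
        node = next(out for match, out in tables[node]
                    if all(pkt.get(k) == v for k, v in match.items()))
        path.append(node)
    return path
```

The reason the SDN path is not obvious a priori is visible even here: two packets entering the same switch can take entirely different paths depending on which rule they match.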
  7. Network security devices intercept, analyze, and act on the traffic moving through the network to enforce security policies. They can have an adverse impact on the performance, functionality, and privacy provided by the network. To address this issue, we propose a new approach to network security based on the concept of short-term, on-demand security exceptions. The basic idea is to bring network providers and (trusted) users together by (1) implementing coarse-grained security policies in the traditional way using conventional in-band security approaches, and (2) handling special cases (policy exceptions) in the control plane using user/application-supplied information. By divulging their intent to network providers, trusted users can receive better service. By allowing security exceptions, network providers can focus inspections on general (untrusted) traffic. We describe the design of an on-demand security exception mechanism and demonstrate its utility using a prototype implementation that enables high-speed big-data transfer across campus networks. Our experiments show that the security exception mechanism can significantly improve the throughput of trusted users' flows.
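The short-term exception idea can be sketched as a small table keyed by flow with an expiry time: granted flows skip deep inspection until the exception lapses. The function names, flow-tuple format, and TTL below are invented for the sketch, not the paper's API.

```python
import time

# Illustrative short-term security-exception table (names/format invented):
# trusted flows bypass deep inspection until their exception expires.
exceptions = {}   # (src, dst, dport) -> expiry timestamp

def grant_exception(flow, ttl_s):
    exceptions[flow] = time.monotonic() + ttl_s

def needs_inspection(flow):
    expiry = exceptions.get(flow)
    return expiry is None or time.monotonic() > expiry

grant_exception(("10.1.1.5", "10.2.2.7", 5201), ttl_s=300)
```

Making exceptions short-lived is the safety valve: a compromised "trusted" host only bypasses inspection for the TTL window, after which the default coarse-grained policy reapplies automatically.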
  8. Existing campus network infrastructure is not designed to effectively handle the transmission of big data sets. Performance degradation in these networks is often caused by middleboxes -- appliances that enforce campus-wide policies by deeply inspecting all traffic going through the network (including big data transmissions). We are developing a Software-Defined Networking (SDN) solution for our campus network that grants privilege to science flows by dynamically calculating routes that bypass certain middleboxes to avoid the bottlenecks they create. Using the global network information provided by an SDN controller, we are developing graph database approaches to compute custom paths that not only bypass middleboxes to achieve certain requirements (e.g., latency, bandwidth, hop count) but also insert rules that modify packets hop-by-hop to create the illusion of standard routing/forwarding despite the fact that packets are being rerouted. In some cases, additional functionality needs to be added to the path using network function virtualization (NFV) techniques (e.g., NAT). To ensure that path computations run on an up-to-date snapshot of the topology, we introduce a versioning mechanism that allows for lazy topology updates, which occur only when "important" network changes take place and are requested by big data flows.
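The middlebox-bypass path computation above reduces, in its simplest form, to a shortest-path search that excludes middlebox nodes. The topology below is an invented toy (a firewall on the default path, a clean alternate path); a graph database query over the controller's topology would play the same role at scale.

```python
from collections import deque

# Toy topology (invented): "fw" is a middlebox on the default path.
topo = {"h1": ["sw1"], "sw1": ["h1", "fw", "sw2"], "fw": ["sw1", "core"],
        "sw2": ["sw1", "core"], "core": ["fw", "sw2", "dtn"], "dtn": ["core"]}
middleboxes = {"fw"}

def bypass_path(src, dst):
    """BFS shortest path from src to dst that never enters a middlebox."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topo[path[-1]]:
            if nxt not in seen and nxt not in middleboxes:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no middlebox-free path exists

path = bypass_path("h1", "dtn")
```

Weighted requirements (latency, bandwidth) would swap BFS for Dijkstra over edge metrics, and the hop-by-hop rewrite rules mentioned in the abstract would be installed along the returned path.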
  9. The emergence of big data has created new challenges for researchers transmitting big data sets across campus networks to local (HPC) cloud resources, or over wide area networks to public cloud services. Unlike conventional HPC systems where the network is carefully architected (e.g., a high speed local interconnect, or a wide area connection between Data Transfer Nodes), today's big data communication often occurs over shared network infrastructures with many external and uncontrolled factors influencing performance. This paper describes our efforts to understand and characterize the performance of various big data transfer tools such as rclone, cyberduck, and other provider-specific CLI tools when moving data to/from public and private cloud resources. We analyze the various parameter settings available on each of these tools and their impact on performance. Our experimental results give insights into the performance of cloud providers and transfer tools, and provide guidance for parameter settings when using cloud transfer tools. We also explore performance when coming from HPC DTN nodes as well as researcher machines located deep in the campus network, and show that emerging SDN approaches such as the VIP Lanes system can deliver excellent performance even from researchers' machines. 
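One recurring parameter-tuning result in this space is that parallel streams help only until they saturate the bottleneck link. The sketch below is a toy throughput model with invented numbers, not the paper's measured results, but it shows the shape of the trade-off the parameter studies explore.

```python
# Toy transfer-time model (numbers invented, not measured results): adding
# streams raises throughput linearly until the bottleneck link saturates.
def transfer_time_s(size_gb, streams, per_stream_gbps=1.0, bottleneck_gbps=10.0):
    rate_gbps = min(streams * per_stream_gbps, bottleneck_gbps)
    return size_gb * 8 / rate_gbps   # GB -> Gb, then divide by rate

t4 = transfer_time_s(100, streams=4)    # 800 Gb / 4 Gbps
t16 = transfer_time_s(100, streams=16)  # capped at the 10 Gbps bottleneck
```

In practice the per-stream rate is itself a function of RTT, TCP tuning, and the cloud provider's per-connection limits, which is why the abstract's tool-by-tool parameter study matters.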